We introduce an optimal transport-based model for learning a metric tensor from cross-sectional samples of evolving probability measures on a common Riemannian manifold. We neurally parametrize the metric as a spatially-varying matrix field and efficiently optimize our model's objective using a simple alternating scheme. Using this learned metric, we can nonlinearly interpolate between probability measures and compute geodesics on the manifold. We show that metrics learned using our method improve the quality of trajectory inference on scRNA and bird migration data at the cost of little additional cross-sectional data.
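As a rough illustration of the spatially-varying metric parametrization described above, the sketch below (an assumption for exposition, not the authors' code) uses a small MLP to predict a lower-triangular Cholesky factor at each point and forms an SPD metric tensor from it, which could then enter an optimal-transport or geodesic objective.

```python
import torch
import torch.nn as nn

class MetricField(nn.Module):
    """Hypothetical spatially-varying SPD metric: x -> G(x) = L(x) L(x)^T + eps*I."""
    def __init__(self, dim=2, hidden=64, eps=1e-4):
        super().__init__()
        self.dim, self.eps = dim, eps
        n_tri = dim * (dim + 1) // 2          # entries of a lower-triangular factor
        self.mlp = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, n_tri),
        )

    def forward(self, x):                      # x: (batch, dim)
        batch = x.shape[0]
        tril = torch.zeros(batch, self.dim, self.dim, device=x.device)
        idx = torch.tril_indices(self.dim, self.dim)
        tril[:, idx[0], idx[1]] = self.mlp(x)  # fill the lower triangle
        G = tril @ tril.transpose(1, 2)        # positive semi-definite by construction
        return G + self.eps * torch.eye(self.dim, device=x.device)  # strictly SPD

# The Riemannian length of a small displacement dx at x is then sqrt(dx^T G(x) dx).
```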
The need to efficiently compare and represent datasets with unknown alignment spans a variety of fields, from model analysis and comparison in machine learning to trend discovery in collections of medical datasets. We use manifold learning to compare the intrinsic geometric structure of different datasets by comparing their diffusion operators: symmetric positive-definite (SPD) matrices that relate to approximations of the continuous Laplace-Beltrami operator from discrete samples. Existing methods typically assume known data alignment and compare such operators in a pointwise manner. Instead, we exploit the Riemannian geometry of SPD matrices to compare these operators and define a new, theoretically-motivated distance based on a lower bound of the log-Euclidean metric. Our framework facilitates the comparison of data manifolds expressed in datasets with differing sizes, numbers of features, and measurement modalities. Our log-Euclidean signature (LES) distance recovers meaningful structural differences and outperforms competing methods across various application domains.
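For context, the log-Euclidean metric referenced here is the Frobenius distance between matrix logarithms of SPD operators, d(A, B) = ||log A - log B||_F. The snippet below is a generic illustration of that distance (not the LES lower bound itself), assuming both operators are SPD and of equal size.

```python
import numpy as np

def log_euclidean_distance(A, B):
    """||logm(A) - logm(B)||_F for SPD matrices A, B of equal size."""
    def spd_logm(M):
        # matrix logarithm via eigendecomposition (valid because M is SPD)
        w, V = np.linalg.eigh(M)
        return (V * np.log(w)) @ V.T
    return np.linalg.norm(spd_logm(A) - spd_logm(B), ord="fro")

# Example: two diffusion-operator-like SPD matrices
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 5))
A = X @ X.T + 5 * np.eye(5)
B = A + 0.1 * np.eye(5)
print(log_euclidean_distance(A, B))
```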
Much of computer-generated animation is created by manipulating meshes with rigs. While this approach works well for animating articulated objects such as animals, it offers limited flexibility for animating less structured, free-form objects. We introduce Wassersplines, a novel trajectory inference method for animating unstructured densities, based on recent advances in continuous normalizing flows and optimal transport. The key idea is to train a neurally-parametrized velocity field that represents the motion between keyframes. Trajectories are then computed by advecting keyframes through the velocity field. We solve an additional Wasserstein barycenter interpolation problem to guarantee strict adherence to keyframes. Our tool can stylize trajectories through a variety of PDE-based regularizers to create different visual effects. We demonstrate our tool on various keyframe interpolation problems to produce temporally-coherent animations without meshing or rigging.
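To make the "advect keyframes through a velocity field" step concrete, here is a minimal, hypothetical PyTorch sketch: a time-conditioned MLP velocity field and a forward-Euler integrator that pushes keyframe samples forward in time. The actual method also involves optimal-transport and barycenter terms not shown here.

```python
import torch
import torch.nn as nn

class VelocityField(nn.Module):
    """Hypothetical time-varying velocity field v(x, t) parametrized by an MLP."""
    def __init__(self, dim=2, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x, t):                        # x: (n, dim), t: scalar in [0, 1]
        t_col = torch.full((x.shape[0], 1), float(t), device=x.device)
        return self.net(torch.cat([x, t_col], dim=1))

def advect(points, field, t0=0.0, t1=1.0, steps=50):
    """Push samples of a keyframe density along the flow with forward Euler."""
    x, dt = points, (t1 - t0) / steps
    for k in range(steps):
        x = x + dt * field(x, t0 + k * dt)
    return x

# Usage: sample a keyframe density, advect it, and compare the result to the next
# keyframe (e.g. with a Sinkhorn/Wasserstein loss) to train `field` end to end.
```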
Recent techniques have been successful in reconstructing surfaces as level sets of learned functions (such as signed distance fields) parametrized by deep neural networks. However, many of these techniques are limited to closed surfaces and fail to reconstruct shapes with boundary curves. We propose a hybrid shape representation that combines explicit boundary curves with implicitly-learned interiors. Using machinery from geometric measure theory, we parametrize currents with deep networks and use stochastic gradient descent to solve a minimal surface problem. By modifying the metric according to target geometry, coming for example from a mesh or point cloud, we can use this approach to represent arbitrary surfaces, learning implicitly-defined shapes with explicitly-defined boundary curves. We further demonstrate learning families of shapes jointly parametrized by boundary curves and latent codes.
We present a volumetric mesh-based algorithm for parameterizing the placenta to a flattened template to enable effective visualization of local anatomy and function. MRI shows potential as a research tool because it provides signals directly related to placental function. However, due to the curved and highly variable in vivo shape of the placenta, interpreting and visualizing these images is difficult. We address the interpretation challenge by mapping the placenta so that it resembles the familiar ex vivo shape. We formulate the parameterization as an optimization problem for mapping the placental shape, represented by a volumetric mesh, to a flattened template. We employ the symmetric Dirichlet energy to control local distortion throughout the volume. Local injectivity of the mapping is enforced by a constrained line search during gradient descent optimization. We validate our method on a research study of 111 placental shapes extracted from BOLD MRI images. Our mapping achieves sub-voxel accuracy in matching the template while maintaining low distortion throughout the volume. We demonstrate how placental flattening improves visualization of anatomy and function. Our code is freely available at https://github.com/mabulnaga/placenta-flattening.
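The symmetric Dirichlet energy mentioned above penalizes both stretch and compression of each mapped element; for a per-tetrahedron Jacobian J it is ||J||_F^2 + ||J^{-1}||_F^2. The sketch below is an assumed, simplified setup (not the released code) that evaluates this energy from a stack of per-tet Jacobians.

```python
import numpy as np

def symmetric_dirichlet(jacobians, volumes):
    """Volume-weighted symmetric Dirichlet energy.

    jacobians: (T, 3, 3) deformation gradient of each tetrahedron
    volumes:   (T,) rest volume of each tetrahedron
    """
    J = jacobians
    J_inv = np.linalg.inv(J)                     # assumes injective (det > 0) elements
    stretch = np.sum(J ** 2, axis=(1, 2))        # ||J||_F^2 per tetrahedron
    compress = np.sum(J_inv ** 2, axis=(1, 2))   # ||J^{-1}||_F^2 per tetrahedron
    return np.sum(volumes * (stretch + compress))

# A constrained line search would shrink the gradient step whenever a proposed
# update drives any det(J) toward zero, preserving local injectivity.
```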
Point cloud registration is a key problem for computer vision applied to robotics, medical imaging, and other applications. This problem involves finding a rigid transformation from one point cloud into another so that they align. Iterative Closest Point (ICP) and its variants provide simple and easily-implemented iterative methods for this task, but these algorithms can converge to spurious local optima. To address local optima and other difficulties in the ICP pipeline, we propose a learning-based method, titled Deep Closest Point (DCP), inspired by recent techniques in computer vision and natural language processing. Our model consists of three parts: a point cloud embedding network; an attention-based module combined with a pointer generation layer to approximate combinatorial matching; and a differentiable singular value decomposition (SVD) layer to extract the final rigid transformation. We train our model end-to-end on the ModelNet40 dataset and show in several settings that it performs better than ICP, its variants (e.g., Go-ICP, FGR), and the recently-proposed learning-based method PointNetLK. Beyond providing a state-of-the-art registration technique, we evaluate the suitability of our learned features transferred to unseen objects. We also provide preliminary analysis of our learned model to help understand whether domain-specific and/or global features facilitate rigid registration.
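The final SVD layer performs, in a differentiable way, the classical Procrustes/Kabsch solve for the rigid transform given (soft) correspondences. The snippet below is a plain NumPy illustration of that solve, not the DCP implementation itself.

```python
import numpy as np

def rigid_from_correspondences(src, tgt):
    """Best-fit rotation R and translation t with R @ src_i + t ≈ tgt_i (Kabsch)."""
    src_c, tgt_c = src.mean(axis=0), tgt.mean(axis=0)
    H = (src - src_c).T @ (tgt - tgt_c)             # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # correct for reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t

# In DCP, `tgt` would be the soft-matched points produced by the attention/pointer
# module, with the same solve implemented as a differentiable SVD layer.
```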
Participants in political discourse employ rhetorical strategies -- such as hedging, attributions, or denials -- to display varying degrees of belief commitments to claims proposed by themselves or others. Traditionally, political scientists have studied these epistemic phenomena through labor-intensive manual content analysis. We propose to help automate such work through epistemic stance prediction, drawn from research in computational semantics, to distinguish at the clausal level what is asserted, denied, or only ambivalently suggested by the author or other mentioned entities (belief holders). We first develop a simple RoBERTa-based model for multi-source stance predictions that outperforms more complex state-of-the-art modeling. Then we demonstrate its novel application to political science by conducting a large-scale analysis of the Mass Market Manifestos corpus of U.S. political opinion books, where we characterize trends in cited belief holders -- respected allies and opposed bogeymen -- across U.S. political ideologies.
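As a hedged sketch of the kind of clause-level stance classifier described (the model name, label set, and input format here are illustrative assumptions, not the authors' released system), one could fine-tune RoBERTa to label a clause, relative to a marked belief holder, as asserted, denied, or ambivalent:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["asserted", "denied", "ambivalent"]   # illustrative 3-way stance scheme

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=len(LABELS)
)

def predict_stance(holder, clause):
    # Encode the belief holder and the clause as a sentence pair.
    enc = tokenizer(holder, clause, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**enc).logits
    return LABELS[int(logits.argmax(dim=-1))]

print(predict_stance("the senator", "that the bill might reduce costs"))
# Untrained classification heads give arbitrary output; fine-tuning on
# stance-annotated clauses is required before predictions are meaningful.
```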
3D shapes have complementary abstractions from low-level geometry to part-based hierarchies to languages, which convey different levels of information. This paper presents a unified framework to translate between pairs of shape abstractions: $\textit{Text}$ $\Longleftrightarrow$ $\textit{Point Cloud}$ $\Longleftrightarrow$ $\textit{Program}$. We propose $\textbf{Neural Shape Compiler}$ to model the abstraction transformation as a conditional generation process. It converts 3D shapes of three abstract types into unified discrete shape code, transforms each shape code into code of other abstract types through the proposed $\textit{ShapeCode Transformer}$, and decodes them to output the target shape abstraction. Point Cloud code is obtained in a class-agnostic way by the proposed $\textit{Point}$VQVAE. On Text2Shape, ShapeGlot, ABO, Genre, and Program Synthetic datasets, Neural Shape Compiler shows strengths in $\textit{Text}$ $\Longrightarrow$ $\textit{Point Cloud}$, $\textit{Point Cloud}$ $\Longrightarrow$ $\textit{Text}$, $\textit{Point Cloud}$ $\Longrightarrow$ $\textit{Program}$, and Point Cloud Completion tasks. Additionally, Neural Shape Compiler benefits from jointly training on all heterogeneous data and tasks.
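The "unified discrete shape code" relies on vector quantization in the VQ-VAE style. The fragment below is a generic nearest-codebook quantizer with a straight-through gradient, meant only to illustrate that step under assumed shapes and sizes, not $\textit{Point}$VQVAE itself.

```python
import torch

def vector_quantize(z, codebook):
    """Map continuous features z (n, d) to nearest codebook entries (k, d).

    Returns quantized vectors (with a straight-through gradient) and the integer
    code indices that a ShapeCode-style transformer would consume.
    """
    dists = torch.cdist(z, codebook)            # (n, k) pairwise distances
    codes = dists.argmin(dim=1)                 # discrete shape code indices
    z_q = codebook[codes]
    z_q = z + (z_q - z).detach()                # straight-through estimator
    return z_q, codes

# Example: 8 feature vectors quantized against a 512-entry codebook
z = torch.randn(8, 64, requires_grad=True)
codebook = torch.randn(512, 64)
z_q, codes = vector_quantize(z, codebook)
```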
In order for artificial neural networks to begin accurately mimicking biological ones, they must be able to adapt to new exigencies without forgetting what they have learned from previous training. Lifelong learning approaches to artificial neural networks attempt to strive towards this goal, yet have not progressed far enough to be realistically deployed for natural language processing tasks. The proverbial roadblock of catastrophic forgetting still gate-keeps researchers from an adequate lifelong learning model. While efforts are being made to quell catastrophic forgetting, there is a lack of research that looks into the importance of class ordering when training on new classes for incremental learning. This is surprising as the ordering of "classes" that humans learn is heavily monitored and incredibly important. While heuristics to develop an ideal class order have been researched, this paper examines class ordering as it relates to priming as a scheme for incremental class learning. By examining the connections between various methods of priming found in humans and how those are mimicked yet remain unexplained in life-long machine learning, this paper provides a better understanding of the similarities between our biological systems and the synthetic systems while simultaneously improving current practices to combat catastrophic forgetting. Through the merging of psychological priming practices with class ordering, this paper is able to identify a generalizable method for class ordering in NLP incremental learning tasks that consistently outperforms random class ordering.